AI-Assisted Stakeholder Proposals for Engineering Leaders
Learn how engineering leaders use AI to draft stronger proposals, roadmaps, and ROI narratives without losing human judgment.
Engineering leaders are under pressure to communicate more clearly, more often, and with more proof than ever before. Whether you are asking for headcount, pitching platform modernization, or justifying a tooling investment, the real challenge is rarely the writing itself; it is translating technical work into language that resonates with finance, product, and executive stakeholders. Generative AI can dramatically speed up that process, but only if it is used as a drafting partner rather than an autonomous decision-maker. That distinction matters, because AI-powered frontend generation and other AI accelerators can produce polished output fast, yet polished does not automatically mean strategic, accurate, or persuasive.
This guide shows engineering managers how to use AI-assisted proposals to draft funding requests, roadmap communication, and ROI narratives while keeping human judgment, review workflows, and strategic nuance firmly in control. It is grounded in a simple principle echoed by nonprofit leadership: AI can support the process, but it cannot replace the strategy, relationships, and accountability behind the proposal. Think of it like a strong drafting engine paired with a seasoned editor. The engine helps you move faster, but the editor decides what the organization should actually say, omit, challenge, and commit to.
For teams building modern operating practices, this is part of a broader shift in how organizations adopt AI responsibly. It connects to everything from cloud strategy shift and business automation to how teams evaluate AI infrastructure build, lease, or outsource decisions. It also has a governance dimension: the best proposal process is not the one with the fanciest prompt, but the one that creates predictable quality, transparent assumptions, and defensible decisions.
1. Why AI-Assisted Proposals Matter Now
Stakeholder expectations have changed
Executives increasingly expect engineering leaders to do more than explain technical risks. They want a business case, a timeline, a confidence level, and a clear view of how the proposal will affect revenue, cost, reliability, or customer experience. The bar has risen because AI is compressing cycle times across the business, which means technical leaders are often asked to participate in faster planning rhythms. When that happens, the old pattern of “I’ll write the business case next week” becomes a bottleneck rather than a process.
AI-assisted proposals help teams respond with more consistency, especially when the audience wants a crisp summary first and technical detail second. This mirrors trends seen in fast-moving functions like AI-influenced B2B funnels, where the value of a message is measured by whether it moves a decision forward. Engineering proposals are no different: the proposal must be readable, credible, and tightly aligned to the decision at hand.
AI reduces drafting time, not decision complexity
One of the biggest mistakes leaders make is assuming that if AI can generate a memo in seconds, then the proposal process itself should be automated end-to-end. That is a category error. The real work in proposal writing is not sentence generation; it is prioritization, tradeoff framing, and narrative judgment. AI can summarize notes, shape the outline, and propose alternative wording, but it cannot know your org’s political context, leadership appetite, or hidden dependencies unless you explicitly provide those constraints.
This is why a human-in-the-loop model is essential. Good process design separates drafting from approval, just as a disciplined analytics team separates raw data collection from the final executive narrative. If you need a practical reference for structured data hygiene, look at GA4 migration playbooks for dev teams, which show how schema, QA, and validation create trustworthy output. Proposal workflows benefit from the same rigor.
Human judgment remains the competitive advantage
In proposal writing, judgment is what determines whether a request is timely, whether the tradeoff is framed correctly, and whether the initiative should even be proposed this quarter. AI can help articulate the case, but it cannot tell you when to stay silent, when to wait for more data, or when to re-scope the ask to something leadership can actually approve. That is especially important when you are asking for funding in a constrained environment, where every investment is competing with other strategic priorities.
Rochelle M. Jerry’s point about fundraising is directly relevant here: AI may help with the mechanics of writing, but human strategy still drives the outcome. In engineering organizations, that strategy includes sequencing the request, anticipating objections, and linking the proposal to a broader roadmap. If you want a parallel from another domain, consider how event teams revive post-launch interest; the messaging matters, but the underlying timing and audience fit matter more.
2. The Anatomy of a Strong Engineering Proposal
Start with the decision, not the document
Before you ask AI to draft anything, define the decision the audience must make. Is it approving budget, endorsing a roadmap, allocating headcount, accepting risk, or authorizing a pilot? Many weak proposals fail because they try to do all five at once. A strong proposal states the decision in one sentence, then builds evidence around that decision instead of wandering through background context.
A helpful prompt pattern is: “Draft a proposal for decision X, for audience Y, using these constraints Z.” That simple structure improves relevance dramatically. It also reflects the logic of effective commercialization documents, such as CFO implementation guides for automated credit decisioning, where the outcome, risk posture, and operating model must be explicit before the recommendation is credible.
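The decision-audience-constraints pattern is easy to standardize in code. Below is a minimal sketch of such a helper; the function name and output shape are illustrative assumptions, not a standard API.

```python
# Hypothetical helper: names and structure are illustrative, not a standard API.
def build_proposal_prompt(decision: str, audience: str, constraints: list[str]) -> str:
    """Assemble a drafting prompt around one decision, one audience, and explicit constraints."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Draft a proposal for this decision: {decision}\n"
        f"Audience: {audience}\n"
        f"Constraints:\n{constraint_lines}\n"
        "Label every quantitative claim as data, assumption, or recommendation."
    )

prompt = build_proposal_prompt(
    decision="Approve a two-sprint platform reliability investment",
    audience="VP Engineering and Finance partner",
    constraints=["No new headcount this quarter", "Must include a 90-day milestone plan"],
)
print(prompt)
```

Keeping the pattern in a shared helper means every manager on the team starts from the same structure, which makes drafts easier to compare and review.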
Use a standard proposal architecture
Most engineering proposals should follow a repeatable structure: executive summary, current problem, proposed solution, alternatives considered, cost and resource estimate, expected ROI or impact, risks, and next steps. AI can draft each section, but you should preserve a single throughline: “Why now, why this, why us.” That narrative discipline helps stakeholders orient quickly and prevents the document from becoming a collection of disconnected facts.
A useful analogy comes from product and commercialization contexts. Just as launch playbooks for retail media translate product readiness into market momentum, engineering proposals must translate technical readiness into organizational momentum. The proposal should make it easy to say yes without making the risks invisible.
Separate evidence from interpretation
One of the most important habits in AI-assisted proposal writing is to distinguish between what the data says and what you infer from it. For example, the evidence might show rising incident volume in a service. The interpretation might be that the team needs a platform investment, but it could also mean the team needs better observability, a smaller scope, or a different deployment pattern. AI is excellent at producing fluent interpretations, which is precisely why humans must review them carefully.
To keep this honest, annotate the proposal with labels such as “data,” “assumption,” and “recommendation.” This format improves trust and makes the document easier to challenge productively. If your team already works with operational health metrics, you may find the discipline familiar from observability frameworks with SLOs, audit trails, and forensic readiness, where evidence must be traceable and decisions must be auditable.
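The data/assumption/recommendation labeling habit can be enforced with a tiny data structure. This is a minimal sketch; the field names, example claims, and figures are illustrative placeholders, not a standard schema.

```python
from dataclasses import dataclass

# A minimal sketch of the data / assumption / recommendation labeling habit.
# Field names and example figures are illustrative, not a standard schema.
LABELS = {"data", "assumption", "recommendation"}

@dataclass
class Claim:
    text: str
    label: str
    source: str = ""  # dashboard link, ticket, or note that backs the claim

    def __post_init__(self) -> None:
        if self.label not in LABELS:
            raise ValueError(f"label must be one of {sorted(LABELS)}")

claims = [
    Claim("Incident volume rose 22% quarter over quarter", "data", "PagerDuty export"),
    Claim("Half of that growth stems from missing runbooks", "assumption"),
    Claim("Fund a two-sprint observability investment", "recommendation"),
]
unsourced = [c for c in claims if c.label == "data" and not c.source]
print(f"{len(claims)} claims, {len(unsourced)} data claims missing a source")
```

Rejecting unknown labels at construction time is the point: a claim that cannot be categorized is exactly the kind of claim a skeptical reviewer will attack.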
3. Where Generative AI Fits in the Proposal Workflow
Use AI for first drafts, not final authority
The best use of generative AI is to compress the time between raw notes and a usable draft. Give the model your meeting notes, metrics, roadmap constraints, and desired decision, then ask for an outline followed by section-level drafts. This approach lets you retain control over framing while removing the blank-page penalty. It also prevents the common problem of overcommitting to a fully generated narrative that has not been stress-tested by a human reviewer.
One practical workflow is to have AI generate three variants: a conservative executive version, a detail-rich leadership version, and a skeptical reviewer version. This pattern is similar to how teams compare options in procurement and planning, such as premium tech buying guides or expiring flash deal strategies, where the point is not to accept the first option but to compare scenarios before choosing.
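The three-variant workflow can be scripted so the style instructions stay consistent across requests. In practice the output of `variant_prompts` would be sent to your model provider; the variant names and style text below are assumptions for illustration.

```python
# Sketch of the three-variant pattern; in practice the prompts produced here
# would be sent to your model provider. Names and styles are assumptions.
VARIANT_STYLES = {
    "conservative-executive": "One page. Lead with the ask, cost, and payback. No jargon.",
    "detail-rich-leadership": "Include architecture context, dependencies, and milestones.",
    "skeptical-reviewer": "Surface every assumption and the strongest objection to each claim.",
}

def variant_prompts(core_brief: str) -> dict[str, str]:
    """Expand one core brief into three audience-shaped drafting prompts."""
    return {
        name: f"{core_brief}\n\nStyle instructions: {style}"
        for name, style in VARIANT_STYLES.items()
    }

prompts = variant_prompts("Propose a CI migration to cut flaky-build time by 40%.")
for name in prompts:
    print(name)
```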
Use AI for synthesis across messy inputs
Engineering leaders often have proposal inputs scattered across Slack, Jira, incident reports, architecture notes, and one-on-one conversations. AI is highly effective at synthesizing those inputs into a coherent narrative, especially when you provide an explicit source bundle. Ask it to identify repeated themes, unresolved dependencies, and gaps in evidence. This saves time and reduces the chance that important concerns get buried in a long thread or forgotten after a planning meeting.
That synthesis function is particularly valuable for roadmap communication. A roadmap is not just a list of features; it is a story about sequencing, capacity allocation, and strategic bets. If you need a useful analogy, think about how analyst webinars become learning modules: the value comes from organizing fragments into a teachable system, not from copying the raw material verbatim.
Use AI to generate audience-specific variants
The same proposal needs different versions for different stakeholders. Finance wants return, product wants customer impact, executives want strategic alignment, and engineering peers want feasibility. AI can help rewrite the same core argument for each audience without changing the underlying facts. This is where prompt engineering becomes a leadership skill, not just a technical curiosity.
A strong prompt for this stage might say: “Rewrite this proposal for a CFO, emphasizing cost avoidance, payback period, and operational risk. Keep the facts unchanged. Do not add claims not supported by the source notes.” That kind of prompt reduces drift and improves compliance with internal review standards. For an example of careful framing in a different domain, see how teams build trust through consent capture and compliance workflows, where precision is non-negotiable.
4. Prompt Engineering for Funding Requests and ROI Narratives
Design prompts around business outcomes
Bad prompts produce generic prose. Good prompts produce documents that actually help you win support. Instead of asking AI to “write a funding request,” specify the decision, audience, constraints, and desired tone. Include the business outcome you are trying to influence, such as reduced incident cost, improved deployment frequency, lower onboarding friction, or faster feature delivery. The more concrete the outcome, the more actionable the output.
For example: “Draft a one-page funding request for a platform reliability initiative. Audience: VP Engineering and Finance partner. Goals: reduce pager load by 30%, improve weekly deployment success rate, and avoid hiring two additional support engineers. Include assumptions, risks, and a 90-day milestone plan.” This prompt forces the model to think in operating terms rather than generic benefits language.
Make ROI claims defensible
ROI narratives are where AI can be most helpful and most dangerous. Helpful, because it can structure assumptions and calculate scenarios quickly. Dangerous, because it can sound authoritative even when the underlying assumptions are weak. Your job is to ensure that every ROI claim is traceable to a source, a benchmark, or a clearly labeled estimate.
A practical rule is to present three levels of impact: hard savings, efficiency gains, and strategic options value. Hard savings might be avoided contractor spend or reduced incident hours. Efficiency gains might be faster release cycles or reduced manual coordination. Strategic options value might be the ability to ship new capabilities sooner because the team has reclaimed capacity. This layered model is more credible than a single inflated payback number, much like careful buyers evaluating product tiers in tech deal comparisons or commodity purchase tradeoffs, where the cheapest choice is not always the best one.
Use prompts to surface assumptions explicitly
One of the most useful prompt patterns is “List the assumptions behind this recommendation and flag any that are weak or missing.” This turns AI into a stress tester instead of just a writer. It can reveal when you are relying on optimistic adoption rates, vague productivity claims, or unverified time savings. That makes your eventual proposal stronger, because you can address the weak points before leadership does.
If you are familiar with validation practices in product research, the logic will feel familiar. Just as rapid consumer validation helps teams test hypotheses quickly, proposal work should test the business case before it is presented as settled truth. AI helps accelerate that pre-commitment phase, which is where many weak proposals can still be fixed.
5. Review Workflows That Keep AI Honest
Build a human-in-the-loop approval chain
Human-in-the-loop is not just a buzzword; it is the operating model that keeps AI-assisted proposals safe and strategic. At minimum, the workflow should include an initial AI draft, a human editor review, a stakeholder reality check, and an executive sign-off. Depending on the proposal’s size, you may also need legal, finance, or security review. The key is to define who can approve facts, who can approve framing, and who can approve the final ask.
Without that structure, AI can create a false sense of completion. The proposal may read smoothly, but it may still contain unsupported assumptions or omit politically sensitive tradeoffs. This is similar to how breaking-news verification workflows protect accuracy under time pressure: speed matters, but only if verification remains part of the system.
Use red-team review for sensitive proposals
For large funding asks or roadmap shifts, designate one reviewer to challenge the proposal as if they were the most skeptical stakeholder in the room. Their job is to ask: What is missing? What would make this request feel premature? What evidence would a finance leader demand? This role is especially valuable because AI-generated text often sounds more confident than the underlying evidence deserves.
Red-team review does not mean being negative for its own sake. It means improving the proposal’s survivability. A strong proposal should not collapse under reasonable objections. If your team has experience with audit-heavy workflows, you may recognize the discipline from LLM auditing frameworks, which emphasize cumulative risk rather than one-off errors.
Preserve version control and decision history
Every meaningful proposal should have versioned drafts and a short change log that records what changed and why. This is especially important when AI is involved, because stakeholders need to know whether a sentence is a human revision, an AI suggestion, or a negotiated compromise. Version history also helps teams learn which framing patterns consistently secure buy-in and which ones lead to confusion.
This is one reason engineering teams benefit from disciplined documentation habits in adjacent domains such as teardown intelligence and durability analysis. The best decisions are traceable decisions. If your proposal was revised because finance rejected a payback assumption, that should be visible in the record, not lost in a final polished draft.
6. Communicating Roadmaps with Strategic Nuance
Roadmaps are negotiation tools, not promises
Engineering roadmaps often fail when they are treated as static lists rather than evolving agreements. AI can help draft roadmap communication, but the content must reflect uncertainty, sequencing logic, and tradeoffs. A good roadmap narrative explains why certain work comes first, what dependencies exist, and which outcomes are most likely versus merely desirable. That nuance is essential for stakeholder buy-in because it makes the roadmap feel realistic instead of aspirational.
When prompting AI, ask it to preserve uncertainty and highlight dependencies. For example: “Draft a roadmap summary that distinguishes committed work, exploratory bets, and conditional items. Include the rationale for sequencing and note the assumptions that could change priorities.” That kind of prompt prevents the common failure mode where AI produces overconfident language that later creates expectation debt.
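The committed/exploratory/conditional split is also useful as a validation step before the roadmap summary is drafted. Here is a minimal sketch under that assumption; the grouping function and example items are illustrative, not a standard.

```python
# Illustrative sketch: tier names follow the committed / exploratory / conditional
# split described above; the grouping function and items are assumptions.
ROADMAP_TIERS = ("committed", "exploratory", "conditional")

def group_roadmap(items: list[tuple[str, str, str]]) -> dict[str, list[str]]:
    """items: (name, tier, rationale). Groups by tier and rejects unknown tiers."""
    grouped: dict[str, list[str]] = {tier: [] for tier in ROADMAP_TIERS}
    for name, tier, rationale in items:
        if tier not in grouped:
            raise ValueError(f"unknown tier: {tier!r}")
        grouped[tier].append(f"{name}: {rationale}")
    return grouped

summary = group_roadmap([
    ("Deploy pipeline hardening", "committed", "blocks the Q3 compliance deadline"),
    ("Edge caching pilot", "exploratory", "payoff depends on traffic mix"),
    ("Search rewrite", "conditional", "proceeds only if two backend hires land"),
])
print({tier: len(entries) for tier, entries in summary.items()})
```

Forcing every item into exactly one tier, with a rationale, is what prevents the "everything is committed" overconfidence the prompt is designed to avoid.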
Align roadmap language with operating realities
Stakeholders are more likely to support a roadmap when they can see how it fits the organization’s capacity, risk profile, and business cycles. If a proposal aligns with a major customer renewal period, a compliance deadline, or a platform migration, say so. If it depends on a hiring plan or a tooling change, make that dependency explicit. AI can help synthesize those relationships, but the leader must decide which dependencies matter enough to surface.
This kind of alignment is familiar from sectors where timing and sequence affect adoption, such as release timing strategy or earnings-call listening workflows, where the right message at the wrong moment still underperforms. Engineering roadmap communication works the same way: timing is part of the message.
Explain tradeoffs in plain language
One of the most valuable things AI can do is translate technical tradeoffs into accessible language. But plain language should not flatten nuance. A good proposal does not say, “This improves performance.” It says, “This reduces tail latency for the top revenue-driving path, but it will delay a lower-priority refactor by one quarter.” That kind of specificity builds trust because it shows you understand the cost of the recommendation.
Stakeholder buy-in increases when people feel the leader has considered alternatives honestly. To sharpen that habit, compare it to product selection guides such as buyer’s guides beyond benchmark scores: the best choice is rarely the most impressive on a single metric. It is the best fit for the full operating context.
7. A Practical Template for AI-Assisted Proposals
Template for a funding request
Use a repeatable template so AI drafts stay consistent across requests. Start with a one-sentence decision ask, followed by the problem, impact, solution, cost, risks, and recommendation. Then add a short section titled “What changes if we do nothing?” That question is often more persuasive than a generic benefits section because it frames inaction as a choice with measurable consequences.
Below is a simple structure you can adapt:
- Decision requested: Approve budget for X.
- Business problem: Describe the operational pain in measurable terms.
- Proposed solution: Summarize the initiative and scope.
- Expected ROI: Include hard savings, efficiency gains, and risk reduction.
- Risks and mitigations: Identify the top 3 concerns.
- Milestones: Show what success looks like in 30, 60, and 90 days.
This template is flexible enough to handle everything from tooling requests to platform investment, yet structured enough to make AI output easier to review. If you want a benchmarking mindset for your request, borrow from AI-powered security camera comparisons and AI inventory planning workflows: standardization helps people compare options quickly.
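The bullet template above can be kept as structured data and rendered to Markdown so every AI draft starts from the same skeleton. A minimal sketch; the section titles mirror the list above, and the placeholder text is illustrative.

```python
# Minimal rendering of the funding-request template above as Markdown.
# Section titles mirror the bullet list; the placeholder text is illustrative.
FUNDING_SECTIONS = [
    ("Decision requested", "Approve budget for X."),
    ("Business problem", "Describe the operational pain in measurable terms."),
    ("Proposed solution", "Summarize the initiative and scope."),
    ("Expected ROI", "Include hard savings, efficiency gains, and risk reduction."),
    ("Risks and mitigations", "Identify the top 3 concerns."),
    ("Milestones", "Show what success looks like in 30, 60, and 90 days."),
    ("What changes if we do nothing?", "Frame inaction as a choice with costs."),
]

def render_funding_request(sections: list[tuple[str, str]]) -> str:
    """Render (title, body) pairs as a Markdown skeleton for AI drafting."""
    return "\n\n".join(f"## {title}\n{body}" for title, body in sections)

print(render_funding_request(FUNDING_SECTIONS))
```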
Template for roadmap communication
For roadmap updates, lead with outcomes and sequencing. Include three layers: what is shipping, why it matters, and what must be true for it to land on time. Avoid over-indexing on feature names if the audience cares more about customer outcomes or team capacity. AI can take bullet notes from planning sessions and turn them into an executive-friendly summary, but you must preserve the logic behind prioritization.
Ask the model to write in two modes: “customer-facing clarity” and “internal planning nuance.” The first version should be understandable to non-engineers. The second should preserve dependencies, risks, and technical caveats. This dual-output approach is especially useful when you need a single source of truth that serves both leadership and delivery teams.
Template for ROI narratives
ROI narratives work best when they show a chain of causality: problem, intervention, mechanism, measurable effect. For example, “Too much manual coordination causes delayed releases; standardized automation reduces coordination overhead; reduced overhead creates more release capacity; more release capacity accelerates customer value.” That chain is more credible than claiming generic productivity gains. AI can help you write the chain, but it cannot validate the chain unless you feed it real metrics.
To improve rigor, include a table with baseline, expected change, and confidence level. This makes the narrative more honest and easier to defend. It also helps decision-makers understand whether the proposal is low-risk, medium-risk, or a strategic bet that deserves staged approval.
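That baseline/expected/confidence table is simple enough to generate mechanically. The sketch below renders it as Markdown; the metrics and numbers are illustrative placeholders, not real benchmarks.

```python
# Sketch of the baseline / expected change / confidence table described above.
# The metrics and numbers are illustrative placeholders, not real benchmarks.
def roi_table(rows: list[tuple[str, float, float, str]]) -> str:
    """rows: (metric, baseline, expected, confidence) rendered as Markdown."""
    lines = ["| Metric | Baseline | Expected | Confidence |", "|---|---|---|---|"]
    lines += [f"| {m} | {b} | {e} | {c} |" for m, b, e, c in rows]
    return "\n".join(lines)

table = roi_table([
    ("Release cycle time (days)", 9.0, 6.5, "medium"),
    ("Incident hours per month", 120, 80, "high"),
    ("Manual coordination (hrs/week)", 14, 6, "low"),
])
print(table)
```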
8. Comparison Table: Manual vs AI-Assisted Proposal Workflows
The right comparison is not “human versus AI.” It is “manual-only drafting versus human-led AI-assisted drafting with review controls.” The latter is usually faster and more scalable, but only when the workflow is designed to preserve quality.
| Workflow Element | Manual-Only Approach | AI-Assisted, Human-Led Approach | Best Practice |
|---|---|---|---|
| Drafting speed | Slow, especially for first drafts | Fast initial outline and section drafts | Use AI for the first pass |
| Strategic nuance | Usually strong if the author has time | Can be weak unless prompts are specific | Human review for framing and tradeoffs |
| Consistency across proposals | Varies by author | High if templates and prompts are standardized | Create a proposal system, not one-off documents |
| Evidence traceability | Depends on writer discipline | Can degrade if AI invents unsupported claims | Label data, assumptions, and recommendations separately |
| Stakeholder tailoring | Time-consuming | Efficient multi-audience rewrites | Generate variants, then review for accuracy |
| Governance and auditability | Often informal | Better if version control and approvals are built in | Record revisions and approvers |
| Risk of overclaiming | Moderate | Higher if AI output is accepted uncritically | Red-team the draft before submission |
The table above captures the real tradeoff: AI improves speed and reach, but it can also amplify weak assumptions if the review workflow is sloppy. Leaders who win with AI are usually not the ones who prompt the most; they are the ones who design the best review system. That is why proposal quality depends as much on process architecture as on writing quality.
9. Common Failure Modes and How to Avoid Them
Failure mode: the proposal sounds confident but ungrounded
AI can produce polished language that disguises uncertainty. If your team allows that prose to reach leadership, you risk building trust on shaky assumptions. The antidote is to require source notes for every quantitative claim and to mark speculative language clearly. If a number cannot be defended in a meeting, it should not appear in the proposal as a fact.
Think of this as the proposal equivalent of provenance management. Just as collectors protect certificates and purchase records, leaders should protect the evidence trail behind their asks. The goal is not just persuasion; it is defensibility.
Failure mode: the workflow bypasses stakeholder discovery
Some teams use AI to draft a proposal before talking to stakeholders, then wonder why the final pitch misses the mark. The issue is not the model; it is the missing discovery phase. If you skip the listening step, you risk solving the wrong problem elegantly. Use AI after discovery to summarize what you learned, not before discovery as a substitute for it.
This is especially important for cross-functional asks. Finance, product, security, and operations each care about different risks and outcomes. A proposal that ignores those differences may be technically sound but politically weak. In this respect, stakeholder work resembles advocacy communication grounded in ethics: you must understand the audience and the responsibility that comes with influence.
Failure mode: the review loop is too slow to be useful
Teams sometimes respond to AI risk by making approval so cumbersome that the tool loses its value. That is a mistake. The answer is not endless review; it is a tiered review model. Low-risk updates can use lightweight editing, while large budget requests or strategic roadmaps go through deeper review. The point is to match control depth to decision risk.
That idea parallels how organizations treat other sensitive processes such as passkey rollouts for high-risk accounts or consent capture in marketing. Strong governance does not mean zero velocity; it means calibrated velocity.
10. A 30-Day Rollout Plan for Engineering Leaders
Week 1: define standards and prompts
Start by creating one approved proposal template and a small set of trusted prompt patterns. Define what the AI may draft, what it may not infer, and what must always be human-reviewed. This will give your team a safe starting point and reduce fear about hidden automation. It also creates consistency, which is essential if multiple managers are producing proposals.
Have each manager test the prompt set with a recent real proposal, then compare the AI-assisted version to the original. The goal is not perfection; it is to identify where the model helps and where it adds noise. Capture those findings in a shared doc so the process improves over time.
Week 2: build review checkpoints
Next, assign reviewers by decision type. A budget ask may need finance; a roadmap proposal may need product; a reliability proposal may need SRE or security. Map the review route before the draft is created, because waiting until the end to identify approvers creates bottlenecks. Use a lightweight status tracker so everyone knows where the proposal is in the workflow.
This is where internal operating discipline pays off. Teams that are already comfortable with structured workflows, such as observability-driven reliability processes, tend to adopt proposal review faster because they understand traceability and ownership.
Week 3 and 4: measure outcomes and refine
Track three metrics: time to first draft, number of review cycles, and stakeholder approval rate or decision quality. If AI reduces drafting time but increases review cycles, that is still useful data—it may mean prompts need improvement or assumptions need more grounding. Over time, the best workflows reduce both drafting effort and unnecessary back-and-forth.
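The three rollout metrics are easy to summarize from a simple log of proposals. A minimal sketch under stated assumptions: the record fields (`draft_hours`, `review_cycles`, `approved`) are hypothetical names, not a standard schema.

```python
from statistics import median

# Illustrative summary of the three rollout metrics above; record fields
# (draft_hours, review_cycles, approved) are assumptions, not a standard schema.
def rollout_summary(records: list[dict]) -> dict[str, float]:
    """Median drafting time, median review cycles, and approval rate."""
    return {
        "median_draft_hours": median(r["draft_hours"] for r in records),
        "median_review_cycles": median(r["review_cycles"] for r in records),
        "approval_rate": sum(r["approved"] for r in records) / len(records),
    }

records = [
    {"draft_hours": 2, "review_cycles": 3, "approved": True},
    {"draft_hours": 5, "review_cycles": 2, "approved": True},
    {"draft_hours": 3, "review_cycles": 4, "approved": False},
    {"draft_hours": 4, "review_cycles": 2, "approved": True},
]
print(rollout_summary(records))
# A falling median draft time with rising review cycles points at prompt quality,
# not reviewer pedantry.
```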
Also track qualitative feedback. Ask stakeholders whether the proposal made the decision easier, whether any important nuance was missing, and whether the ROI narrative felt credible. Those comments are often more valuable than raw speed metrics because they reveal whether the communication actually improved. This is the point where AI becomes a management tool, not just a writing tool.
Conclusion: Use AI to Draft Faster, Not Think Less
AI-assisted proposals are most effective when they sharpen, rather than replace, the judgment of engineering leaders. The winning pattern is simple: use generative AI to accelerate drafting, to synthesize messy inputs, and to produce audience-specific variants, while keeping humans accountable for strategy, evidence, and approval. That balance gives you speed without sacrificing trust.
If you build the right workflow, your team can produce stronger funding requests, clearer roadmap communication, and more credible ROI narratives with less friction. If you skip the workflow and rely on the model to do the thinking, you will get polished prose and weak decisions. For more on building disciplined AI practices, explore our guides on auditing LLMs for harm, buyability metrics in AI-influenced funnels, and AI infrastructure strategy. The lesson across all of them is the same: AI is strongest when it works inside a clear operating model.
Pro Tip: Treat every AI-assisted proposal as a decision artifact, not a content artifact. If the document cannot survive skeptical review, it is not ready for leadership.
FAQ: AI-Assisted Stakeholder Proposals for Engineering Leaders
1. What should AI draft versus what should humans decide?
AI should draft outlines, summaries, alternative phrasings, and audience-specific versions of the same core message. Humans should decide the actual ask, the tradeoffs, the risk posture, and the final recommendation. In other words, AI can help you write the proposal, but it should not decide what the organization should do.
2. How do I prevent AI from inventing metrics or overstating ROI?
Use source notes, label assumptions, and require every quantitative claim to trace back to a real input or an explicitly stated estimate. Ask AI to list assumptions separately and flag any weak points. Then review the draft like a finance partner would: skeptical, precise, and evidence-driven.
3. Can AI help with roadmap communication without making it sound too generic?
Yes, if you feed it the roadmap context, audience, sequencing logic, and constraints. Ask it to preserve uncertainty, distinguish committed work from exploratory bets, and explain the “why now” behind priorities. Then edit for strategic nuance so the final version reflects actual operating reality.
4. What is the best review workflow for AI-assisted proposals?
Start with a human-led draft review, then route the proposal to the relevant cross-functional stakeholders, and finish with executive approval. For larger or riskier asks, add a red-team reviewer whose job is to challenge assumptions and identify missing evidence. Keep version history so changes and approvals are traceable.
5. How do I know if AI is actually improving proposal quality?
Measure time to first draft, number of revisions, stakeholder confidence, and whether decisions are made faster or with less ambiguity. Also ask reviewers whether the proposal was clearer, more credible, and more useful than prior versions. If AI saves time but degrades trust, the workflow needs redesign.
6. Should I use AI for every funding request?
Not necessarily. Use it when the proposal is complex, time-sensitive, or requires multiple audience versions. For highly sensitive, politically charged, or legally constrained requests, use AI more cautiously and rely on deeper human review.
Related Reading
- AI-Powered Frontend Generation: Which Tools Are Actually Ready for Enterprise Teams? - A useful companion for understanding where AI drafts are strong and where human review still matters.
- Auditing LLMs for Cumulative Harm: A Practical Framework Inspired by Nutrition Misinformation Research - Learn how to assess AI outputs for repeated, compounding risk.
- GA4 Migration Playbook for Dev Teams: Event Schema, QA and Data Validation - A strong model for disciplined validation and traceable outputs.
- Consent Capture for Marketing: Integrating eSign with Your MarTech Stack Without Breaking Compliance - A governance-first look at approvals, compliance, and operational controls.
- Observability for healthcare middleware in the cloud: SLOs, audit trails and forensic readiness - A rigorous example of auditability that proposal workflows can borrow from.
Jordan Ellis
Senior SEO Content Strategist
Senior editor and content strategist. Writing about technology, design, and the future of digital media. Follow along for deep dives into the industry's moving parts.